
feat(provider): add Google Vertex/AI context caching annotations#17569

Open
ccompton-merge wants to merge 2 commits into anomalyco:dev from MERGE-AI-Garage:feat/google-vertex-cache-annotations

Conversation

@ccompton-merge ccompton-merge commented Mar 15, 2026

Issue for this PR

Closes #6851
Related: #17568

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Implements part 1 (Provider Transformation) of #6851. applyCaching() in transform.ts sets cache hints for several providers but skips Google. This adds the cachePoint annotation for @ai-sdk/google-vertex and @ai-sdk/google, matching the pattern already used by the bedrock entry.

Two changes (5 lines total):

  1. Add google: { cachePoint: { type: "default" } } to applyCaching() provider options
  2. Add an else if in message() to call applyCaching() for Google provider models
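The two changes above can be sketched as follows. Function and option names (`applyCaching()`, `cachePoint`, `type: "default"`) come from the PR description; the surrounding structure of transform.ts is assumed, not quoted from the repository.

```typescript
// Sketch of the provider-options map that applyCaching() attaches to a
// message. The bedrock entry mirrors the existing pattern described in
// the PR; the google entry is the addition this PR makes.
type CachePoint = { cachePoint: { type: "default" } };

function cachingProviderOptions(): Record<string, CachePoint> {
  return {
    bedrock: { cachePoint: { type: "default" } }, // existing entry
    google: { cachePoint: { type: "default" } },  // added by this PR
  };
}
```

Because the map is keyed by provider name, providers that don't recognize `cachePoint` simply never read their entry, which is why the change is backward-compatible.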

Fully backward-compatible — cachePoint is a hint the AI SDK passes to the Google provider; providers that don't support it ignore it. Parts 2-3 from #6851 (usage tracking, LLM orchestration) can follow separately.

Thanks for building OpenCode. 🙏

How did you verify your code works?

  • Verified the diff is limited to the 5 intended lines in transform.ts
  • cachePoint annotation follows the same pattern as the existing bedrock entry
  • message() condition follows the same guard-clause pattern as the Anthropic branch
  • Confirmed via AI SDK docs that @ai-sdk/google-vertex and @ai-sdk/google support cachePoint

Screenshots / recordings

N/A — no UI changes.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Add cache point annotations for Google providers (@ai-sdk/google-vertex
and @ai-sdk/google) in the applyCaching() and message() functions.

Currently, applyCaching() sets cache control hints for Anthropic,
OpenRouter, Bedrock, Copilot, and OpenAI-compatible providers, but
skips Google/Vertex entirely. This means Gemini models miss out on
the AI SDK's cache point signaling, which can significantly improve
implicit context caching hit rates.

Changes:
- Add google.cachePoint annotation to applyCaching() provider options
- Extend message() to call applyCaching() for @ai-sdk/google-vertex
  and @ai-sdk/google models (previously only called for Anthropic)
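The `message()` extension described above can be sketched as a guard clause. The provider package names are taken from the PR text; the helper name `shouldApplyCaching` and the exact matching logic are hypothetical, standing in for the real control flow in transform.ts.

```typescript
// Hypothetical guard: previously only Anthropic models triggered
// applyCaching(); this PR adds the two Google provider packages.
function shouldApplyCaching(providerID: string): boolean {
  if (providerID.includes("anthropic")) return true;        // existing branch
  if (providerID === "@ai-sdk/google-vertex") return true;  // added
  if (providerID === "@ai-sdk/google") return true;         // added
  return false;
}
```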


Development

Successfully merging this pull request may close these issues.

[FEATURE]: Google/VertexAI Context Caching

1 participant